AI Knowledge Assist: An Automated Approach for the Creation of Knowledge Bases for Conversational AI Agents
Laskar, Md Tahmid Rahman; Tremblay, Julien Bouvier; Fu, Xue-Yong; Chen, Cheng; TN, Shashi Bhushan
The use of conversational AI systems that leverage Retrieval Augmented Generation (RAG) techniques to solve customer problems has been on the rise with the rapid progress of Large Language Models (LLMs). However, the absence of a company-specific dedicated knowledge base is a major barrier to the integration of conversational AI systems in contact centers. To this end, we introduce AI Knowledge Assist, a system that extracts knowledge in the form of question-answer (QA) pairs from historical customer-agent conversations to automatically build a knowledge base. Fine-tuning a lightweight LLM on internal data achieves state-of-the-art performance, outperforming larger closed-source LLMs. More specifically, empirical evaluation on 20 companies demonstrates that the proposed AI Knowledge Assist system, which leverages the LLaMA-3.1-8B model, eliminates the cold-start gap in contact centers by achieving above 90% accuracy in answering information-seeking questions. This enables the immediate deployment of RAG-powered chatbots.
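The transcript-to-QA-pair shape of such a pipeline can be illustrated with a deliberately simple sketch. The paper's system uses a fine-tuned LLM for extraction; the heuristic below (a customer turn ending in "?" paired with the next agent turn) is only a stand-in for that step, and all function and field names are hypothetical.

```python
# Minimal sketch of mining QA pairs from customer-agent transcripts.
# Heuristic stand-in for the paper's LLM-based extraction: a customer
# turn ending in "?" is treated as a question, and the next agent turn
# as its answer. Names are illustrative, not the paper's API.

def extract_qa_pairs(transcript):
    """transcript: list of (speaker, text) tuples in turn order."""
    pairs = []
    pending_question = None
    for speaker, text in transcript:
        text = text.strip()
        if speaker == "customer" and text.endswith("?"):
            pending_question = text
        elif speaker == "agent" and pending_question:
            pairs.append({"question": pending_question, "answer": text})
            pending_question = None
    return pairs

calls = [
    [("customer", "How do I reset my router?"),
     ("agent", "Hold the reset button for ten seconds.")],
    [("customer", "Thanks, that worked."),
     ("agent", "Glad to help!")],
]
# Aggregate pairs across historical calls into a knowledge base.
knowledge_base = [p for call in calls for p in extract_qa_pairs(call)]
print(knowledge_base)
```

In a real system the extracted pairs would then be deduplicated and indexed for RAG retrieval.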
- Europe > Austria > Vienna (0.14)
- North America > Canada > Ontario > Toronto (0.04)
- Asia > Middle East > Yemen > Amran Governorate > Amran (0.04)
Redefining CX with Agentic AI: Minerva CQ Case Study
Agrawal, Garima; De Maria, Riccardo; Davuluri, Kiran; Spera, Daniele; Read, Charlie; Spera, Cosimo; Garrett, Jack; Miller, Don
Despite advances in AI for contact centers, customer experience (CX) continues to suffer from high average handling time (AHT), low first-call resolution, and poor customer satisfaction (CSAT). A key driver is the cognitive load on agents, who must navigate fragmented systems, troubleshoot manually, and frequently place customers on hold. Existing AI-powered agent-assist tools are often reactive, driven by static rules, simple prompting, or retrieval-augmented generation (RAG) without deeper contextual reasoning. We introduce Agentic AI: goal-driven, autonomous, tool-using systems that proactively support agents in real time. Unlike conventional approaches, Agentic AI identifies customer intent, triggers modular workflows, maintains evolving context, and adapts dynamically to conversation state. This paper presents a case study of Minerva CQ, a real-time Agent Assist product deployed in voice-based customer support. Minerva CQ integrates real-time transcription, intent and sentiment detection, entity recognition, contextual retrieval, dynamic customer profiling, and partial conversational summaries, enabling proactive workflows and continuous context-building. Deployed in live production, Minerva CQ acts as an AI co-pilot, delivering measurable improvements in agent efficiency and customer experience across multiple deployments.
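The intent-to-workflow dispatch pattern described above can be sketched as follows. Intent detection is reduced here to keyword matching (in production it would be a trained classifier), and every name in this snippet is hypothetical, not Minerva CQ's actual API.

```python
# Sketch of agentic dispatch: detected intents trigger registered
# workflows that update an evolving conversation context.

WORKFLOWS = {}

def workflow(intent):
    """Decorator registering a handler for a given intent."""
    def register(fn):
        WORKFLOWS[intent] = fn
        return fn
    return register

@workflow("billing_dispute")
def billing_dispute(context):
    context["suggested_action"] = "open billing review form"
    return context

@workflow("cancel_service")
def cancel_service(context):
    context["suggested_action"] = "offer retention discount"
    return context

def detect_intent(utterance):
    # Keyword stand-in for a real intent classifier.
    if "bill" in utterance.lower():
        return "billing_dispute"
    if "cancel" in utterance.lower():
        return "cancel_service"
    return None

def assist(utterance, context):
    """Update the evolving conversation context after each utterance."""
    intent = detect_intent(utterance)
    if intent in WORKFLOWS:
        context["intent"] = intent
        context = WORKFLOWS[intent](context)
    return context

ctx = assist("I want to cancel my plan", {"history": []})
print(ctx["suggested_action"])  # offer retention discount
```

The registry pattern keeps workflows modular: new intents can be added without touching the dispatch loop.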
- Asia > India (0.04)
- North America > United States > California (0.04)
- Asia > Middle East > Jordan (0.04)
Silent Abandonment in Text-Based Contact Centers: Identifying, Quantifying, and Mitigating its Operational Impacts
Castellanos, Antonio; Yom-Tov, Galit B.; Goldberg, Yair; Park, Jaeyoung
In the quest to improve services, companies offer customers the option to interact with agents via texting. Such contact centers face unique challenges compared to traditional call centers, as measuring customer experience proxies like abandonment and patience involves uncertainty. A key source of this uncertainty is silent abandonment, where customers leave without notifying the system, wasting agent time and leaving their status unclear. Silent abandonment also obscures whether a customer was served or left. Our goals are to measure the magnitude of silent abandonment and mitigate its effects. Classification models show that 3%-70% of customers across 17 companies abandon silently. In one study, 71.3% of abandoning customers did so silently, reducing agent efficiency by 3.2% and system capacity by 15.3%, incurring $5,457 in annual costs per agent. We develop an expectation-maximization (EM) algorithm to estimate customer patience under uncertainty and identify influencing covariates. We find that companies should use classification models to estimate abandonment scope and our EM algorithm to assess patience. We suggest strategies to operationally mitigate the impact of silent abandonment by predicting suspected silent-abandonment behavior or changing service design. Specifically, we show that while allowing customers to write while waiting in the queue creates a missing data challenge, it also significantly increases patience and reduces service time, leading to reduced abandonment and lower staffing requirements.
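The patience-estimation idea can be made concrete with a small EM-style sketch. It assumes exponentially distributed patience and takes as input each customer's observed wait and a classifier's probability that the customer abandoned (possibly silently); served customers are right-censored, since their patience exceeded their wait. This is an illustration of the approach, not the paper's exact algorithm.

```python
# EM-style estimator for mean customer patience under uncertain
# abandonment, assuming patience ~ Exponential(lam).
# waits[i]: observed wait; abandon_probs[i]: classifier probability
# that customer i abandoned rather than being served (hypothetical
# inputs for illustration).

def em_patience(waits, abandon_probs, iters=200):
    lam = len(waits) / sum(waits)  # initial rate guess
    for _ in range(iters):
        # E-step: expected patience per customer. If served
        # (prob 1 - q), memorylessness gives E[T | T > w] = w + 1/lam;
        # if abandoned (prob q), patience equals the observed wait.
        expected_total = sum(
            w + (1.0 - q) / lam for w, q in zip(waits, abandon_probs)
        )
        # M-step: MLE of the exponential rate from expected patience.
        lam = len(waits) / expected_total
    return 1.0 / lam  # mean patience, same time units as waits

waits = [2.0, 5.0, 1.0, 8.0, 3.0]
abandon_probs = [0.9, 0.1, 0.8, 0.05, 0.5]
print(round(em_patience(waits, abandon_probs), 2))
```

For this exponential model the iteration converges to the soft-label censored MLE, i.e. the rate equals the expected number of abandonments divided by the total observed wait.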
- North America > United States (1.00)
- Europe (0.14)
- Asia > Middle East > Israel (0.14)
- Telecommunications (1.00)
- Health & Medicine > Health Care Providers & Services (0.93)
- Information Technology (0.92)
- Energy > Oil & Gas > Upstream (0.47)
RAG based Question-Answering for Contextual Response Prediction System
Veturi, Sriram; Vaichal, Saurabh; Jagadheesh, Reshma Lal; Tripto, Nafis Irtiza; Yan, Nian
Large Language Models (LLMs) have shown versatility in various Natural Language Processing (NLP) tasks, including their potential as effective question-answering systems. However, to provide precise and relevant information in response to specific customer queries in industry settings, LLMs require access to a comprehensive knowledge base to avoid hallucinations. Retrieval Augmented Generation (RAG) emerges as a promising technique to address this challenge. Yet, developing an accurate question-answering framework for real-world applications using RAG entails several challenges: 1) data availability issues, 2) evaluating the quality of generated content, and 3) the costly nature of human evaluation. In this paper, we introduce an end-to-end framework that employs LLMs with RAG capabilities for industry use cases. Given a customer query, the proposed system retrieves relevant knowledge documents and leverages them, along with previous chat history, to generate response suggestions for customer service agents in the contact centers of a major retail company. Through comprehensive automated and human evaluations, we show that this solution outperforms current BERT-based algorithms in accuracy and relevance. Our findings suggest that RAG-based LLMs can provide excellent support to human customer service representatives by lightening their workload.
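The retrieve-then-generate loop described above can be sketched in a few lines. Retrieval is reduced to token-overlap scoring, and "generation" stops at assembling the prompt that would be sent to an LLM; the document store and all helper names are made up for illustration.

```python
# Bare-bones RAG sketch: score documents against the query, take the
# top-k, and assemble a prompt combining knowledge, chat history, and
# the customer query for an LLM to complete.

def tokens(text):
    return {w.strip(".,?!").lower() for w in text.split()}

def score(query, doc):
    q, d = tokens(query), tokens(doc)
    return len(q & d) / (len(q) or 1)

def retrieve(query, docs, k=2):
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query, history, docs):
    context = "\n".join(retrieve(query, docs))
    return (
        f"Knowledge:\n{context}\n\n"
        f"Chat history:\n{history}\n\n"
        f"Customer: {query}\nSuggested agent reply:"
    )

docs = [
    "Returns are accepted within 30 days with a receipt.",
    "Gift cards cannot be redeemed for cash.",
    "Store hours are 9am to 9pm on weekdays.",
]
prompt = build_prompt(
    "Can I return an item without a receipt?",
    "Customer: Hi, I bought a blender last week.",
    docs,
)
print(prompt)
```

A production system would replace the overlap scorer with dense embeddings and send the prompt to the generation model; the grounding structure stays the same.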
- North America > United States > Idaho > Ada County > Boise (0.05)
- North America > United States > Georgia > Fulton County > Atlanta (0.04)
- North America > United States > Georgia > Cobb County (0.04)
- Retail (0.67)
- Government (0.46)
- Information Technology > Artificial Intelligence > Natural Language > Question Answering (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.49)
Purpose-built AI builds better customer experiences
In the bygone era of contact centers, the customer experience was tethered to a single channel: the phone call. The journey began with a pre-recorded message prompting the customer to press a number corresponding to their query. Today's contact centers have evolved beyond traditional phone calls to multiple channels, from email to social media to chatbots. Customers have access to more business information than ever. But improving the quality of customer experiences means becoming more customer-centric and data-driven, and scaling available human representatives for round-the-clock assistance.
Towards Probing Contact Center Large Language Models
Nathan, Varun; Kumar, Ayush; Ingle, Digvijay; Vepa, Jithendra
Fine-tuning large language models (LLMs) with domain-specific instructions has emerged as an effective method to enhance their domain-specific understanding. Yet, there is limited work examining the core characteristics acquired during this process. In this study, we benchmark the fundamental characteristics learned by contact-center (CC) specific instruction fine-tuned LLMs against out-of-the-box (OOB) LLMs via probing tasks encompassing conversational, channel, and automatic speech recognition (ASR) properties. We explore different LLM architectures (Flan-T5 and Llama), sizes (3B, 7B, 11B, 13B), and fine-tuning paradigms (full fine-tuning vs. PEFT). Our findings reveal the remarkable effectiveness of CC-LLMs on in-domain downstream tasks, with response acceptability improving by over 48% compared to OOB-LLMs. Additionally, we compare the performance of OOB-LLMs and CC-LLMs on the widely used SentEval dataset, and assess their capabilities in terms of surface, syntactic, and semantic information through probing tasks. Intriguingly, we note relatively consistent performance of probing classifiers across the set of probing tasks. Our observations indicate that CC-LLMs, while outperforming their out-of-the-box counterparts, tend to rely less on encoding surface, syntactic, and semantic properties. This highlights the intricate interplay between domain-specific adaptation and probing-task performance, opening up opportunities to explore the behavior of fine-tuned language models in specialized contexts.
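A probing task itself is simple to sketch: freeze the model, take its hidden states, and train a small linear classifier to predict a property of the input. Below, random vectors stand in for frozen LLM embeddings, with one coordinate weakly encoding a surface property; in the paper these would come from OOB- or CC-LLM layers, and all names here are illustrative.

```python
# Probing sketch: a linear (perceptron) classifier trained on frozen
# representations to predict a surface property of the input sentence
# (here, whether it was "long"). High probe accuracy suggests the
# representation encodes the property.

import random

random.seed(0)

def make_embedding(is_long, dim=16):
    # Toy stand-in for a frozen hidden state: coordinate 0 weakly
    # encodes the probed property, the rest are noise.
    vec = [random.gauss(0, 1) for _ in range(dim)]
    vec[0] += 2.0 if is_long else -2.0
    return vec

data = [(make_embedding(y), y) for y in [0, 1] * 50]

# Train a perceptron probe on the frozen embeddings.
w, b = [0.0] * 16, 0.0
for _ in range(20):
    for x, y in data:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
        if pred != y:
            delta = y - pred  # +1 or -1
            w = [wi + delta * xi for wi, xi in zip(w, x)]
            b += delta

correct = sum(
    (1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0) == y
    for x, y in data
)
print(f"probe accuracy: {correct / len(data):.2f}")
```

The paper's finding corresponds to this probe scoring lower on CC-LLM embeddings than on OOB embeddings for surface properties, despite CC-LLMs winning on downstream tasks.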
- North America > Dominican Republic (0.04)
- Oceania > Australia > Victoria > Melbourne (0.04)
- North America > United States > Hawaii > Honolulu County > Honolulu (0.04)
AI Coach Assist: An Automated Approach for Call Recommendation in Contact Centers for Agent Coaching
Laskar, Md Tahmid Rahman; Chen, Cheng; Fu, Xue-Yong; Azizi, Mahsa; Bhushan, Shashi; Corston-Oliver, Simon
In recent years, the utilization of Artificial Intelligence (AI) in the contact center industry has been on the rise. One area where AI can have a significant impact is the coaching of contact center agents. By analyzing call transcripts using Natural Language Processing (NLP) techniques, it is possible to quickly determine which calls are most relevant for coaching purposes. In this paper, we present AI Coach Assist, which leverages pre-trained transformer-based language models to determine whether a given call is coachable based on the quality assurance (QA) questions asked by contact center managers or supervisors. The system was trained and evaluated on a large dataset collected from real-world contact centers and provides an effective way to recommend to managers the calls most likely to contain coachable moments. Our experimental findings demonstrate the potential of AI Coach Assist to improve the coaching process, thereby enhancing the performance of contact center agents.
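Framed as code, call recommendation is a ranking over binary coachability predictions. The rule-based scorer below stands in for the paper's fine-tuned transformer, and unlike the real model it ignores the QA question; every name and signal list here is hypothetical.

```python
# Sketch of call recommendation for coaching: score each call's
# transcript and surface the top candidates to a manager.

COACHABLE_SIGNALS = ["escalate", "supervisor", "frustrated", "refund"]

def coachability_score(transcript, qa_question):
    """Crude proxy: density of trouble signals in the call text.
    A trained model would condition on qa_question; this sketch
    ignores it."""
    text = " ".join(t for _, t in transcript).lower()
    hits = sum(text.count(s) for s in COACHABLE_SIGNALS)
    return hits / max(len(transcript), 1)

def recommend_calls(calls, qa_question, top_k=1):
    ranked = sorted(
        calls,
        key=lambda c: coachability_score(c, qa_question),
        reverse=True,
    )
    return ranked[:top_k]

calls = [
    [("customer", "I want a refund and to speak to a supervisor"),
     ("agent", "I understand, let me help")],
    [("customer", "What are your store hours?"),
     ("agent", "We are open 9 to 9")],
]
top = recommend_calls(calls, "Did the agent de-escalate effectively?")
print(top[0][0][1])  # the first turn of the highest-scoring call
```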
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.14)
- North America > United States > Washington > King County > Seattle (0.04)
- North America > United States > California > Los Angeles County > Long Beach (0.04)
Silent Abandonment in Contact Centers: Estimating Customer Patience from Uncertain Data
Castellanos, Antonio; Yom-Tov, Galit B.; Goldberg, Yair
In the quest to improve services, companies offer customers the opportunity to interact with agents through contact centers, where communication is mainly text-based. This has become one of the favorite channels of communication with companies in recent years. However, contact centers face operational challenges, since common proxies for customer experience, such as knowledge of whether customers have abandoned the queue and their willingness to wait for service (patience), are subject to information uncertainty. We focus this research on the impact of a main source of such uncertainty: silent abandonment by customers. These customers leave the system while waiting for a reply to their inquiry, but give no indication of doing so, such as by closing the mobile app of the interaction. As a result, the system is unaware that they have left and wastes agent time and capacity until this fact is realized. In this paper, we show that 30%-67% of abandoning customers abandon the system silently, and that such customer behavior reduces system efficiency by 5%-15%. To do so, we develop methodologies to identify silently abandoning customers in two types of contact centers: chat and messaging systems. We first use text analysis and an SVM model to estimate the actual abandonment level. We then use a parametric estimator and develop an expectation-maximization algorithm to estimate customer patience accurately, as customer patience is an important parameter for fitting queueing models to the data. We show how accounting for silent abandonment in a queueing model dramatically improves the estimation accuracy of key performance measures. Finally, we suggest strategies to operationally cope with the phenomenon of silent abandonment.
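The identification step can be sketched as feature extraction over the tail of a chat followed by a linear score. The hand-set weights below stand in for a trained SVM, and all feature names are illustrative assumptions, not the paper's feature set.

```python
# Sketch of scoring a chat for suspected silent abandonment from
# end-of-conversation features: who spoke last, how long the final
# silence was, and whether the customer explicitly closed.

def features(chat):
    """chat: list of (speaker, text, minutes_since_start) in order."""
    last_speaker = chat[-1][0]
    gap = chat[-1][2] - chat[-2][2] if len(chat) > 1 else 0.0
    said_bye = any(
        "bye" in t.lower() or "thanks" in t.lower()
        for s, t, _ in chat[-2:] if s == "customer"
    )
    return {
        "agent_spoke_last": 1.0 if last_speaker == "agent" else 0.0,
        "long_final_gap": 1.0 if gap > 10 else 0.0,
        "customer_closed": 1.0 if said_bye else 0.0,
    }

# Hand-set weights standing in for a trained SVM's coefficients.
WEIGHTS = {
    "agent_spoke_last": 1.5,
    "long_final_gap": 1.0,
    "customer_closed": -2.0,
}

def silent_abandonment_score(chat):
    f = features(chat)
    return sum(WEIGHTS[k] * v for k, v in f.items())

chat = [
    ("customer", "My order never arrived", 0.0),
    ("agent", "Let me check that for you", 2.0),
    ("agent", "Are you still there?", 15.0),
]
print(silent_abandonment_score(chat))
```

Chats scoring above a threshold would be labeled suspected silent abandonments, and those labels feed the patience-estimation step as soft abandonment probabilities.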
- North America > United States (0.14)
- Asia > Middle East > Israel (0.04)
- Health & Medicine > Health Care Providers & Services (0.93)
- Information Technology > Services (0.93)
New Report Says 27% of Online Adults Have Used Generative AI but Caveat Emptor
Fifty-seven percent of U.S. adults believe "generative AI will make my daily life better," according to a new report by Dentsu. The survey of 1,000 online adults in the United States also found that 87% of consumers claim to have some awareness of generative AI, and 61% believe they at least somewhat understand the technology. Even more interesting is that 27% of U.S. adults say they have used generative AI, and another 42% are interested in trying the technology. These tremendously positive numbers seem to support the hyperbolic interest in ChatGPT and text-to-image generators such as Midjourney and Stable Diffusion. But there is more to this story.
Five9 Expands Partnership with Invoca to Provide Deeper Insight into Customer Journeys
Five9, a leading provider of the Intelligent CX Platform, and Invoca, a cloud leader in AI conversation intelligence, announced that they have expanded their strategic partnership to deliver a solution that enables deeper insight into real-time data throughout the entire customer journey and brings contact center and marketing teams closer together to enable more "fluid" CX. "When our agents have no idea what patients may have researched online before they called, delivering a positive, seamless experience is nearly impossible. Pairing Invoca with Five9 allows us to improve our call routing and tracking, through more accurate and granular attribution, at scale, with greater efficiency than ever before." The customized solution, called PreSense, combines the power of the Five9 Intelligent CX Platform with Invoca's conversation intelligence technology, giving contact center agents visibility into a caller's digital journey before the call takes place. Insights from PreSense help reduce call-handling times, increase contact center productivity, and enable more fluid customer experiences that flow effortlessly across channels and between virtual and human agents. For example, an agent could see that a customer researched a specific product or responded to a particular offer online, and use this pre-call context to provide tailored recommendations.